Microsoft is adding new tools to Azure AI aimed at making generative AI applications more secure and trustworthy, helping product builders guard against prompt injection attacks, hallucinations, and content risks. These include Prompt Shields, which detect and block direct and indirect prompt injection attacks; Groundedness Detection, which identifies text-based hallucinations; and safety system message templates, which help steer models away from misuse and harmful output. Azure AI Studio also lets organizations evaluate an AI application's susceptibility to these and other threats.
Wednesday, April 3, 2024
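As a rough illustration of how a product builder might screen input with Prompt Shields, the sketch below builds a request against Azure AI Content Safety's `shieldPrompt` operation and parses a response. The endpoint, key, API version, and response field names here are assumptions based on the service's general REST shape, not confirmed values; consult the official Azure documentation before use. The demo at the bottom runs offline against a sample response dictionary and makes no network call.

```python
# Hypothetical sketch of calling Azure AI Content Safety's Prompt Shields
# REST operation. API version, URL path, and field names are assumptions.
import json
from urllib import request

API_VERSION = "2024-09-01"  # assumed; check current Azure docs


def build_shield_request(endpoint: str, key: str,
                         user_prompt: str, documents: list[str]) -> request.Request:
    """Build an HTTP request asking Prompt Shields to screen a user prompt
    (direct attack) and attached documents (indirect attack).
    Constructing the Request object does not contact the service."""
    url = f"{endpoint}/contentsafety/text:shieldPrompt?api-version={API_VERSION}"
    body = json.dumps({"userPrompt": user_prompt, "documents": documents}).encode()
    return request.Request(
        url,
        data=body,
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"},
        method="POST",
    )


def is_attack(response_body: dict) -> bool:
    """Return True if the prompt or any document was flagged as an attack."""
    if response_body.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(d.get("attackDetected")
               for d in response_body.get("documentsAnalysis", []))


# Offline demo with an assumed response shape (no network call made):
sample = {
    "userPromptAnalysis": {"attackDetected": True},
    "documentsAnalysis": [{"attackDetected": False}],
}
print(is_attack(sample))  # True
```

In practice the request would be sent with `urllib.request.urlopen` (or an HTTP client of choice) and the JSON response fed to `is_attack` to decide whether to block the input before it reaches the model.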